    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    Get PDF
    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structural learning with H2PC, in the form of local neighborhood induction, is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available. (Comment: arXiv admin note: text overlap with arXiv:1101.5184 by other authors.)
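
    To make the two-phase hybrid scheme concrete, the sketch below pairs a crude marginal mutual-information test (to build the skeleton) with a greedy BIC hill-climber restricted to skeleton edges (to orient them). It is a minimal illustration, not the H2PC subroutines: the real algorithm uses proper conditional independence tests and divide-and-conquer local discovery, and the threshold, score, and function names here are assumptions.

    import numpy as np
    from itertools import combinations

    def mutual_info(x, y):
        """Empirical mutual information between two discrete variables."""
        mi = 0.0
        for a in np.unique(x):
            for b in np.unique(y):
                pxy = np.mean((x == a) & (y == b))
                px, py = np.mean(x == a), np.mean(y == b)
                if pxy > 0:
                    mi += pxy * np.log(pxy / (px * py))
        return mi

    def family_bic(data, child, parents):
        """BIC of one node given its parent set (discrete variables)."""
        n = data.shape[0]
        cols = data[:, parents] if parents else np.zeros((n, 0), dtype=int)
        ll, groups = 0.0, {}
        for row, c in zip(map(tuple, cols), data[:, child]):
            groups.setdefault(row, []).append(c)
        for vals in groups.values():
            vals = np.asarray(vals)
            for v in np.unique(vals):
                k = np.sum(vals == v)
                ll += k * np.log(k / len(vals))
        r = len(np.unique(data[:, child]))          # child cardinality
        q = max(len(groups), 1)                     # observed parent configs
        return ll - 0.5 * np.log(n) * (r - 1) * q   # BIC penalty

    def creates_cycle(children, u, v):
        """Would adding u -> v close a directed cycle? (DFS from v)"""
        stack, seen = [v], set()
        while stack:
            w = stack.pop()
            if w == u:
                return True
            if w not in seen:
                seen.add(w)
                stack.extend(children[w])
        return False

    def hybrid_learn(data, mi_threshold=0.01):
        d = data.shape[1]
        # Phase 1: constraint-based skeleton (crude marginal MI test).
        skeleton = [(i, j) for i, j in combinations(range(d), 2)
                    if mutual_info(data[:, i], data[:, j]) > mi_threshold]
        # Phase 2: greedy score-based hill-climbing along skeleton edges.
        children = {i: [] for i in range(d)}
        parents = {i: [] for i in range(d)}
        while True:
            best, best_gain = None, 1e-9
            for i, j in skeleton:
                for u, v in ((i, j), (j, i)):
                    if v in children[u] or creates_cycle(children, u, v):
                        continue
                    gain = (family_bic(data, v, parents[v] + [u])
                            - family_bic(data, v, parents[v]))
                    if gain > best_gain:
                        best, best_gain = (u, v), gain
            if best is None:
                break
            u, v = best
            children[u].append(v)
            parents[v].append(u)
        return children

    # Toy usage: a noisy X0 -> X1 -> X2 chain over binary variables.
    rng = np.random.default_rng(0)
    x0 = rng.integers(0, 2, 2000)
    x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)
    x2 = (x1 ^ (rng.random(2000) < 0.1)).astype(int)
    print(hybrid_learn(np.column_stack([x0, x1, x2])))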

    Exact Combinatorial Optimization with Graph Convolutional Neural Networks

    Full text link
    Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose a new graph convolutional neural network model for learning branch-and-bound variable selection policies, which leverages the natural variable-constraint bipartite graph representation of mixed-integer linear programs. We train our model via imitation learning from the strong branching expert rule, and demonstrate on a series of hard problems that our approach produces policies that improve upon state-of-the-art machine-learning methods for branching and generalize to instances significantly larger than seen during training. Moreover, we improve for the first time over expert-designed branching rules implemented in a state-of-the-art solver on large problems. Code for reproducing all the experiments can be found at https://github.com/ds4dm/learn2branch. (Comment: accepted paper at the NeurIPS 2019 conference.)
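
    To illustrate the bipartite encoding, the sketch below builds variable and constraint nodes from a toy MILP and runs one round of the two half-convolutions (variables to constraints, then constraints to variables) followed by a per-variable scoring head. The feature choices, widths, and random untrained weights are assumptions for illustration, not the paper's trained architecture (see the linked repository for that).

    import numpy as np

    # A tiny MILP: minimize c @ x  subject to  A @ x <= b, x binary.
    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0]])
    b = np.array([3.0, 4.0])
    c = np.array([-1.0, -2.0, -1.0])

    rng = np.random.default_rng(0)
    d = 8  # embedding width (arbitrary)

    # Initial node features, lifted to d dims by random projections.
    var_feat = np.stack([c, np.count_nonzero(A, axis=0)], axis=1)
    con_feat = np.stack([b, np.count_nonzero(A, axis=1)], axis=1)
    h_var = var_feat @ rng.normal(size=(2, d))
    h_con = con_feat @ rng.normal(size=(2, d))

    def relu(x):
        return np.maximum(x, 0.0)

    # Half-convolution 1: constraints aggregate coefficient-weighted
    # messages from their adjacent variables (edges = nonzeros of A).
    h_con = relu((h_con + A @ h_var) @ rng.normal(size=(d, d)))

    # Half-convolution 2: variables aggregate back from constraints.
    h_var = relu((h_var + A.T @ h_con) @ rng.normal(size=(d, d)))

    # Scoring head: one logit per variable; a trained policy's argmax
    # would imitate the strong branching expert's choice.
    scores = h_var @ rng.normal(size=d)
    print("branching scores:", scores, "-> branch on x%d" % np.argmax(scores))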

    Hybrid Models for Learning to Branch

    Full text link
    A recent Graph Neural Network (GNN) approach for learning to branch has been shown to successfully reduce the running time of branch-and-bound algorithms for Mixed Integer Linear Programming (MILP). While the GNN relies on a GPU for inference, MILP solvers are purely CPU-based. This severely limits its application, as many practitioners may not have access to high-end GPUs. In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternative, computationally inexpensive model that retains the predictive power of the GNN architecture? We answer the first question in the negative, and address the second by proposing a new hybrid architecture for efficient branching on CPU machines. The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLPs) for branching. We evaluate our methods on four classes of MILP problems, and show that they lead to up to a 26% reduction in solver running time compared to state-of-the-art methods without a GPU, while extrapolating to harder problems than they were trained on. (Comment: preprint, under review.)
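
    The division of labor in a hybrid architecture can be sketched as follows: an expensive GNN-style embedding is computed once at the root of the branch-and-bound tree, and a cheap MLP re-scores variables at every node from that cached embedding plus dynamic node features. Function names, the feature split, and the random weights are illustrative assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(1)
    n_vars, d_gnn, d_node = 5, 8, 4

    def gnn_root_embeddings(n_vars, d):
        """Stand-in for the expensive GNN: in the hybrid scheme it runs
        once at the root node (possibly on a GPU), never again."""
        return rng.normal(size=(n_vars, d))

    def mlp_score(root_emb, node_feat, W1, W2):
        """Cheap per-node scorer: concatenates the cached root embedding
        with dynamic node features and applies a small MLP on CPU."""
        x = np.concatenate([root_emb, node_feat], axis=1)
        return np.maximum(x @ W1, 0.0) @ W2

    root_emb = gnn_root_embeddings(n_vars, d_gnn)       # computed once
    W1 = rng.normal(size=(d_gnn + d_node, 16))
    W2 = rng.normal(size=16)

    # At each search-tree node only the dynamic features change
    # (e.g. LP solution fractionality, bounds); the GNN is not re-run.
    for node in range(3):
        node_feat = rng.normal(size=(n_vars, d_node))   # placeholder dynamics
        scores = mlp_score(root_emb, node_feat, W1, W2)
        print("node %d: branch on x%d" % (node, np.argmax(scores)))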

    Causal Reinforcement Learning using Observational and Interventional Data

    Get PDF
    Efficiently learning a causal model of the environment is a key challenge for model-based RL agents operating in POMDPs. We consider a scenario where the learning agent can collect online experiences through direct interactions with the environment (interventional data), but also has access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient that makes this situation non-trivial is that we allow the observed agent to interact with the environment based on hidden information, which is not observed by the learning agent. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we import ideas from the well-established causal framework of do-calculus and express model-based reinforcement learning as a causal inference problem. We then propose a general yet simple methodology for leveraging offline data during learning. In a nutshell, the method relies on learning a latent-based causal transition model that explains both the interventional and observational regimes, and then using the recovered latent variable to infer the standard POMDP transition model via deconfounding. We prove that our method is correct and efficient, in the sense that it attains better generalization guarantees thanks to the offline data (in the asymptotic case), and we illustrate its effectiveness empirically on synthetic toy problems. Our contribution aims at bridging the gap between the fields of reinforcement learning and causality.
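
    The deconfounding step can be illustrated in a one-step, bandit-like setting. The sketch contrasts the naive conditional estimate E[r | a], which is biased by a hidden variable u, with the back-door adjustment E[r | do(a)] = sum_u p(u) E[r | a, u]. For simplicity it assumes the confounder has already been recovered; recovering it from mixed observational and interventional data is precisely what the paper's latent-based transition model is for.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Observational regime: a hidden variable u drives both the observed
    # agent's action a and the reward r (confounding).
    u = rng.integers(0, 2, n)
    a = (rng.random(n) < np.where(u == 1, 0.9, 0.1)).astype(int)  # u -> a
    r = (rng.random(n) < 0.2 + 0.6 * u + 0.1 * a).astype(float)   # u, a -> r

    def naive_value(action):
        """Biased estimate E[r | a]: mixes the effect of a with that of u."""
        return r[a == action].mean()

    def adjusted_value(action):
        """Back-door adjustment E[r | do(a)] = sum_u p(u) E[r | a, u],
        using the (here, directly observed) confounder u."""
        return sum((u == k).mean() * r[(a == action) & (u == k)].mean()
                   for k in (0, 1))

    for act in (0, 1):
        print("a=%d  naive=%.3f  adjusted=%.3f  true=%.3f"
              % (act, naive_value(act), adjusted_value(act),
                 0.2 + 0.6 * 0.5 + 0.1 * act))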

    Probabilistic Graphical Model Structure Learning: Application to Multi-Label Classification

    No full text
    In this thesis, we address the specific problem of probabilistic graphical model structure learning, that is, finding the most efficient structure to represent a probability distribution, given only a sample set D ∼ p(v). In the first part, we review the main families of probabilistic graphical models from the literature, from the most common (directed, undirected) to the most advanced (chain graphs, mixed graphs, etc.). We then focus on the problem of learning the structure of directed graphs (Bayesian networks) and propose a new hybrid structure learning method, H2PC (Hybrid Hybrid Parents and Children), which combines a constraint-based approach (statistical independence tests) with a score-based approach (posterior probability of the structure). In the second part, we address the multi-label classification problem, which aims at assigning a set of categories (a binary vector y ∈ {0, 1}^m) to a given object (a vector x ∈ ℝ^d). In this context, probabilistic graphical models provide convenient means of encoding p(y|x), particularly for the purpose of minimizing general loss functions. We review the main PGM-based approaches to multi-label classification (Probabilistic Classifier Chain, Conditional Dependency Network, Bayesian Network Classifier, Conditional Random Field, Sum-Product Network), and propose a generic approach, inspired by constraint-based structure learning methods, to identify the unique partition of the label set into irreducible label factors (ILFs), that is, the irreducible factorization of p(y|x) into disjoint marginal distributions. We establish several theoretical results characterizing the ILFs in terms of the compositional graphoid axioms, and obtain three generic procedures under various assumptions about the conditional independence properties of the joint distribution p(x, y). Our conclusions are supported by carefully designed multi-label classification experiments under the F-loss and the zero-one loss functions.
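
    The label-factor decomposition lends itself to a short sketch: test each pair of labels for dependence given the features, then read off the factors as connected components of the resulting pairwise-dependence graph. The stratified chi-square below is a crude stand-in for a proper conditional independence test, and exhaustive pairwise testing corresponds only to the simplest of the procedures mentioned above; names and thresholds are illustrative.

    import numpy as np
    from itertools import combinations
    from scipy.stats import chi2_contingency

    def ci_dependent(yi, yj, x_strata, alpha=0.01):
        """Crude CI test: chi-square on each stratum of a discretized
        feature; declare dependence if any stratum rejects."""
        for s in np.unique(x_strata):
            m = x_strata == s
            table = np.array([[np.sum((yi[m] == p) & (yj[m] == q))
                               for q in (0, 1)] for p in (0, 1)])
            if (table.sum(0) > 0).all() and (table.sum(1) > 0).all():
                if chi2_contingency(table)[1] < alpha:
                    return True
        return False

    def label_factors(Y, x_strata):
        """Partition labels into factors = connected components of the
        pairwise-dependence graph (union-find)."""
        m = Y.shape[1]
        parent = list(range(m))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i, j in combinations(range(m), 2):
            if ci_dependent(Y[:, i], Y[:, j], x_strata):
                parent[find(i)] = find(j)
        factors = {}
        for i in range(m):
            factors.setdefault(find(i), []).append(i)
        return list(factors.values())

    # Toy data: labels 0 and 1 are coupled; label 2 depends on x only.
    rng = np.random.default_rng(0)
    n = 5000
    x = rng.integers(0, 3, n)                          # discretized feature
    y0 = rng.integers(0, 2, n)
    y1 = (y0 ^ (rng.random(n) < 0.1)).astype(int)      # coupled with y0
    y2 = ((x > 1) ^ (rng.random(n) < 0.05)).astype(int)
    print(label_factors(np.column_stack([y0, y1, y2]), x))
    # typically prints [[0, 1], [2]]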
